memory parallelism

Russian equivalent: параллелизм памяти
The ability to write to or read from several memory cells simultaneously. This increases memory bandwidth and thereby improves overall system performance.
see also memory bandwidth
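
The benefit shows up as memory-level parallelism inside a single core. Below is a minimal C sketch, illustrative only and not part of the dictionary entry: in the pointer-chasing loop each load depends on the previous one, so cache misses are serviced one at a time, while in the array loop the loads are independent and an out-of-order core can keep several misses in flight at once.

#include <stddef.h>

struct node { struct node *next; };

/* Dependent loads: no memory parallelism, each miss waits for the previous one. */
size_t chase(struct node *p, size_t steps) {
    size_t n = 0;
    while (p != NULL && n < steps) {
        p = p->next;   /* the next address is unknown until this load completes */
        n++;
    }
    return n;
}

/* Independent loads: the hardware can overlap many outstanding misses. */
long sum(const long *a, size_t len) {
    long s = 0;
    for (size_t i = 0; i < len; i++)
        s += a[i];     /* a[i] and a[i+1] do not depend on each other */
    return s;
}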

English-Russian Explanatory Dictionary of Terms and Abbreviations in Computing, the Internet, and Programming, 1998-2007.


See what "memory parallelism" means in other dictionaries:

  • Memory-level parallelism — or MLP is a term in computer architecture referring to the ability to have multiple memory operations pending, in particular cache misses or translation lookaside buffer misses, at the same time. In a single processor, MLP may be considered a…   Wikipedia

  • Memory level parallelism — or MLP is a term in computer architecture referring to the ability to have multiple memory operations pending, in particular cache misses, at the same time. MLP may be considered a form of ILP, instruction level parallelism. However, ILP is often…   Wikipedia

  • Memory disambiguation — is a set of techniques employed by high-performance out-of-order execution microprocessors that execute memory access instructions (loads and stores) out of program order. The mechanisms for performing memory disambiguation, implemented using…   Wikipedia

  • Memory coherence — is an issue that affects the design of computer systems in which two or more processors or cores share a common area of memory.[1][2][3][4] In a uniprocessor system (whereby, in today's terms, there exists only one core), there is only one…   Wikipedia

  • Memory architecture — describes the methods used to implement electronic computer data storage in a manner that is a combination of the fastest, most reliable, most durable, and least expensive way to store and retrieve information. Depending on the specific… …   Wikipedia

  • Memory management unit — A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware component responsible for handling accesses to memory requested by the CPU. Its…   Wikipedia

  • Instruction level parallelism — (ILP) is a measure of how many of the operations in a computer program can be performed simultaneously. Consider the following program: 1. e = a + b; 2. f = c + d; 3. g = e * f. Operation 3 depends on the results of operations 1 and 2, so it cannot…   Wikipedia (a C sketch of this program appears after this list)

  • Data parallelism — (also known as loop-level parallelism) is a form of parallelization of computing across multiple processors in parallel computing environments. Data parallelism focuses on distributing the data across different parallel computing nodes. It…   Wikipedia (see the threaded sketch after this list)

  • Task parallelism — (also known as function parallelism and control parallelism) is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing execution processes (threads)… …   Wikipedia

  • Non-Uniform Memory Access — (NUMA) is a computer memory design used in multiprocessing, where the memory access time depends on the memory location relative to a processor. Under NUMA, a processor can access its own local memory faster than non-local memory, that is, memory…   Wikipedia

  • Distributed memory — In computer science, distributed memory refers to a multiple-processor computer system in which each processor has its own private memory. Computational tasks can only operate on…   Wikipedia (see the MPI sketch after this list)
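
As promised above, a C rendering of the three-operation program quoted in the "Instruction level parallelism" entry; the variable names follow the excerpt:

int g_of(int a, int b, int c, int d) {
    int e = a + b;   /* operation 1 */
    int f = c + d;   /* operation 2: independent of operation 1, can issue in parallel */
    int g = e * f;   /* operation 3: must wait for the results of 1 and 2 */
    return g;
}

On a superscalar core the first two additions can execute in the same cycle; the multiply is serialized behind them.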
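
And a minimal data-parallelism sketch, assuming POSIX threads (the function and variable names here are illustrative, not from any entry above): the same operation is applied to disjoint halves of one array by two threads. Distributing different functions across the threads instead would be task parallelism.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct slice { int lo, hi; };

/* Each thread doubles its own half of the array: same code, different data. */
static void *double_slice(void *arg) {
    struct slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        data[i] *= 2;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    struct slice left = {0, N / 2}, right = {N / 2, N};
    pthread_create(&t1, NULL, double_slice, &left);
    pthread_create(&t2, NULL, double_slice, &right);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    for (int i = 0; i < N; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}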
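
Finally, a minimal distributed-memory sketch using MPI (the dictionary entry names no particular API, so MPI is an assumption): each process owns private memory, and data moves between address spaces only through explicit messages.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int x = 42;   /* exists only in rank 0's private memory */
        MPI_Send(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int x;        /* a separate variable in rank 1's address space */
        MPI_Recv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", x);
    }
    MPI_Finalize();
    return 0;
}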

